DenseNet-DC: Optimizing DenseNet Parameters Through Feature Map Generation Control
Authors
Abstract
Similar Papers
Log-DenseNet: How to Sparsify a DenseNet
Skip connections are increasingly utilized by deep neural networks to improve accuracy and cost-efficiency. In particular, the recent DenseNet is efficient in computation and parameters, and achieves state-of-the-art predictions by directly connecting each feature layer to all previous ones. However, DenseNet’s extreme connectivity pattern may hinder its scalability to high depths, and in appli...
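A minimal sketch of the dense connectivity pattern the abstract describes, written in PyTorch; the BN → ReLU → 3×3 conv layer composition and all names here are illustrative assumptions, not details from the paper:

```python
import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    """One dense layer: BN -> ReLU -> 3x3 conv emitting `growth_rate`
    new feature maps (an illustrative composition, not the paper's)."""
    def __init__(self, in_channels, growth_rate):
        super().__init__()
        self.norm = nn.BatchNorm2d(in_channels)
        self.conv = nn.Conv2d(in_channels, growth_rate,
                              kernel_size=3, padding=1, bias=False)

    def forward(self, x):
        return self.conv(torch.relu(self.norm(x)))

class DenseBlock(nn.Module):
    """Each layer consumes the concatenation of the block input and
    ALL previous layers' outputs -- the direct connectivity described above."""
    def __init__(self, num_layers, in_channels, growth_rate):
        super().__init__()
        self.layers = nn.ModuleList(
            DenseLayer(in_channels + i * growth_rate, growth_rate)
            for i in range(num_layers)
        )

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            # each new layer sees every earlier feature map via concatenation
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)

block = DenseBlock(num_layers=4, in_channels=16, growth_rate=12)
print(block(torch.randn(1, 16, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```

Because each layer's output is appended to the running feature list, layer i receives in_channels + i * growth_rate input channels; this quadratic growth in connections is exactly what Log-DenseNet proposes to sparsify.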
JPEG Steganalysis Based on DenseNet
Current research has indicated that convolutional neural networks (CNNs) can work well for steganalysis in the spatial domain. There are, however, fewer CNN-based works in the JPEG domain. In this paper, we have proposed a 32-layer CNN architecture based on the Dense Convolutional Network (DenseNet) for JPEG steganalysis. The proposed CNN architecture can reuse features by concatenating fea...
DenseNet: Implementing Efficient ConvNet Descriptor Pyramids
Convolutional Neural Networks (CNNs) can provide accurate object classification. They can be extended to perform object detection by iterating over dense or selected proposed object regions. However, the runtime of such detectors scales as the total number and/or area of regions to examine per image, and training such detectors may be prohibitively slow. However, for some CNN classifier topolog...
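A toy PyTorch illustration of the cost trade-off this abstract points at: re-running a CNN on every region versus computing the feature map once and slicing per-region descriptors out of the shared result. The tiny stride-1 backbone and the region coordinates are invented for the example and are not the paper's actual pipeline:

```python
import torch
import torch.nn as nn

# Stride-1 backbone keeps feature-map coordinates aligned with pixels,
# so region coordinates can be reused directly on the feature map.
backbone = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
)

image = torch.randn(1, 3, 256, 256)
regions = [(0, 0, 128, 128), (64, 64, 192, 192), (100, 20, 228, 148)]  # (x1, y1, x2, y2)

# Naive detector: re-run the CNN on every crop; cost grows with the
# number and area of regions, as the abstract notes.
naive = [backbone(image[:, :, y1:y2, x1:x2]) for (x1, y1, x2, y2) in regions]

# Shared-computation alternative: run the backbone once, then slice each
# region's descriptor from the shared feature map (values near region
# borders differ slightly, since crops see zero padding instead of context).
features = backbone(image)
shared = [features[:, :, y1:y2, x1:x2] for (x1, y1, x2, y2) in regions]
```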
CondenseNet: An Efficient DenseNet using Learned Group Convolutions
Deep neural networks are increasingly used on mobile devices, where computational resources are limited. In this paper we develop CondenseNet, a novel network architecture with unprecedented efficiency. It combines dense connectivity between layers with a mechanism to remove unused connections. The dense connectivity facilitates feature re-use in the network, whereas learned group convolutions ...
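For reference, a hedged sketch of the grouped convolution underlying CondenseNet, using a plain PyTorch group convolution; the training-time connection pruning that makes the groups "learned" is not shown here:

```python
import torch
import torch.nn as nn

# A standard 1x1 group convolution: with groups=4, each output filter only
# reads 64/4 = 16 input channels, cutting parameters and FLOPs by 4x.
# CondenseNet's contribution is to *learn* which input channels each group
# keeps, removing unused connections during training; this sketch shows
# only the grouped structure that makes the resulting network cheap.
conv = nn.Conv2d(in_channels=64, out_channels=128, kernel_size=1,
                 groups=4, bias=False)

x = torch.randn(1, 64, 32, 32)
print(conv(x).shape)      # torch.Size([1, 128, 32, 32])
print(conv.weight.shape)  # torch.Size([128, 16, 1, 1]) -- 16 inputs per filter
```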
Memory-Efficient Implementation of DenseNets
The DenseNet architecture [9] is highly computationally efficient as a result of feature reuse. However, a naïve DenseNet implementation can require a significant amount of GPU memory: If not properly managed, pre-activation batch normalization [7] and contiguous convolution operations can produce feature maps that grow quadratically with network depth. In this technical report, we introduce st...
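A minimal PyTorch sketch of the recompute-instead-of-store idea behind such strategies, similar in spirit to the memory_efficient option in torchvision's DenseNet, which relies on torch.utils.checkpoint; the class and method names are illustrative, and this is not the report's exact shared-memory implementation:

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

class CheckpointedDenseLayer(nn.Module):
    """Dense layer that recomputes its cheap concat/BN/ReLU intermediates
    during the backward pass instead of storing them, so activation memory
    per layer stays constant rather than growing quadratically with depth."""
    def __init__(self, in_channels, growth_rate):
        super().__init__()
        self.norm = nn.BatchNorm2d(in_channels)
        self.conv = nn.Conv2d(in_channels, growth_rate, 3, padding=1, bias=False)

    def _bottleneck(self, *features):
        # Recomputed on backward: concat + BN + ReLU are memory-hungry but cheap.
        return self.conv(torch.relu(self.norm(torch.cat(features, dim=1))))

    def forward(self, features):
        # `features` is the list of all earlier layers' outputs in the block.
        return checkpoint(self._bottleneck, *features, use_reentrant=False)

layers = [CheckpointedDenseLayer(16 + i * 12, 12) for i in range(4)]
feats = [torch.randn(1, 16, 32, 32, requires_grad=True)]
for layer in layers:
    feats.append(layer(feats))
print(torch.cat(feats, dim=1).shape)  # torch.Size([1, 64, 32, 32])
```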
Journal
Journal title: Revista de Informática Teórica e Aplicada
Year: 2020
ISSN: 2175-2745, 0103-4308
DOI: 10.22456/2175-2745.98369